139 research outputs found

    Drive network to a desired orbit by pinning control

    Get PDF
    summary:The primary objective of the present paper is to develop an approach for analyzing pinning synchronization stability in a complex delayed dynamical network with directed coupling. Some simple yet generic criteria for pinning such coupled network are derived analytically. Compared with some existing works, the primary contribution is that the synchronization manifold could be chosen as a weighted average of all the nodes states in the network for the sake of practical control tactics, which displays the different influences and contributions of the various nodes in synchronization seeking processes of the dynamical network. Furthermore, it is shown that in order to drive a complex network to a desired synchronization state, the coupling strength should vary according to the controller. In addition, the theoretical results about the time-invariant network is extended to the time-varying network, and the result on synchronization problem can also be extended to the consensus problem of networked multi-agent systems. Subsequently, the theoretic results are illustrated by a typical scale-free (SF) neuronal network. Numerical simulations with three kinds of the homogenous solutions, including an equilibrium point, a periodic orbit, and a chaotic attractor, are finally given to demonstrate the effectiveness of the proposed control methodology

    GAMMA: Revisiting Template-based Automated Program Repair via Mask Prediction

    Full text link
    Automated program repair (APR) aims to fix software bugs without human intervention and template-based APR has been widely investigated with promising results. However, it is challenging for template-based APR to select the appropriate donor code, which is an important repair ingredient for generating candidate patches. Inappropriate donor code may cause plausible but incorrect patch generation even with correct fix patterns, limiting the repair performance. In this paper, we aim to revisit template-based APR, and propose GAMMA, to directly leverage large pre-trained language models for donor code generation. Our main insight is that instead of retrieving donor code in the local buggy file, we can directly predict the correct code tokens based on the context code snippets and repair patterns by a cloze task. Specifically, (1) GAMMA revises a variety of fix templates from state-of-the-art template-based APR techniques (i.e., TBar) and transforms them into mask patterns. (2) GAMMA adopts a pre-trained language model to predict the correct code for masked code as a fill-in-the-blank task. The experimental results demonstrate that GAMMA correctly repairs 82 bugs on Defects4J-v1.2, which achieves 20.59\% (14 bugs) and 26.15\% (17 bugs) improvement over the previous state-of-the-art template-based approach TBar and learning-based one Recoder. Furthermore, GAMMA repairs 45 bugs and 22 bugs from the additional Defects4J-v2.0 and QuixBugs, indicating the generalizability of GAMMA in addressing the dataset overfitting issue. We also prove that adopting other pre-trained language models can provide substantial advancement, e.g., CodeBERT-based and ChatGPT-based GAMMA is able to fix 80 and 67 bugs on Defects4J-v1.2, indicating the scalability of GAMMA. Overall, our study highlights the promising future of adopting pre-trained models to generate correct patches on top of fix patterns.Comment: Accepted to 38th IEEE/ACM International Conference on Automated Software Engineering (ASE2023

    PRINCIPLES OF THE SPLASH CONTROL TECHNIQUE IN DIVING

    Get PDF
    INTRODUCTION: "Splash control" is a key element of water entry technique in competitive diving. This process starts from the initial contact of a diver's body with the water surface until complete entry of the rest of the body into the water. The purpose of this study was to establish the most effective hand pattern and body posture that can achieve the best “splash control” to minimize water splash

    A Survey of Learning-based Automated Program Repair

    Full text link
    Automated program repair (APR) aims to fix software bugs automatically and plays a crucial role in software development and maintenance. With the recent advances in deep learning (DL), an increasing number of APR techniques have been proposed to leverage neural networks to learn bug-fixing patterns from massive open-source code repositories. Such learning-based techniques usually treat APR as a neural machine translation (NMT) task, where buggy code snippets (i.e., source language) are translated into fixed code snippets (i.e., target language) automatically. Benefiting from the powerful capability of DL to learn hidden relationships from previous bug-fixing datasets, learning-based APR techniques have achieved remarkable performance. In this paper, we provide a systematic survey to summarize the current state-of-the-art research in the learning-based APR community. We illustrate the general workflow of learning-based APR techniques and detail the crucial components, including fault localization, patch generation, patch ranking, patch validation, and patch correctness phases. We then discuss the widely-adopted datasets and evaluation metrics and outline existing empirical studies. We discuss several critical aspects of learning-based APR techniques, such as repair domains, industrial deployment, and the open science issue. We highlight several practical guidelines on applying DL techniques for future APR studies, such as exploring explainable patch generation and utilizing code features. Overall, our paper can help researchers gain a comprehensive understanding about the achievements of the existing learning-based APR techniques and promote the practical application of these techniques. Our artifacts are publicly available at \url{https://github.com/QuanjunZhang/AwesomeLearningAPR}

    A Critical Review of Large Language Model on Software Engineering: An Example from ChatGPT and Automated Program Repair

    Full text link
    Large Language Models (LLMs) have been gaining increasing attention and demonstrated promising performance across a variety of Software Engineering (SE) tasks, such as Automated Program Repair (APR), code summarization, and code completion. For example, ChatGPT, the latest black-box LLM, has been investigated by numerous recent research studies and has shown impressive performance in various tasks. However, there exists a potential risk of data leakage since these LLMs are usually close-sourced with unknown specific training details, e.g., pre-training datasets. In this paper, we seek to review the bug-fixing capabilities of ChatGPT on a clean APR benchmark with different research objectives. We first introduce {\benchmark}, a new benchmark with buggy and the corresponding fixed programs from competitive programming problems starting from 2023, after the training cutoff point of ChatGPT. The results on {\benchmark} show that ChatGPT is able to fix 109 out of 151 buggy programs using the basic prompt within 35 independent rounds, outperforming state-of-the-art LLMs CodeT5 and PLBART by 27.5\% and 62.4\% prediction accuracy. We also investigate the impact of three types of prompts, i.e., problem description, error feedback, and bug localization, leading to additional 34 fixed bugs. Besides, we provide additional discussion from the interactive nature of ChatGPT to illustrate the capacity of a dialog-based repair workflow with 9 additional fixed bugs. Inspired by the findings, we further pinpoint various challenges and opportunities for advanced SE study equipped with such LLMs (e.g.,~ChatGPT) in the near future. More importantly, our work calls for more research on the reevaluation of the achievements obtained by existing black-box LLMs across various SE tasks, not limited to ChatGPT on APR

    Backdooring Neural Code Search

    Full text link
    Reusing off-the-shelf code snippets from online repositories is a common practice, which significantly enhances the productivity of software developers. To find desired code snippets, developers resort to code search engines through natural language queries. Neural code search models are hence behind many such engines. These models are based on deep learning and gain substantial attention due to their impressive performance. However, the security aspect of these models is rarely studied. Particularly, an adversary can inject a backdoor in neural code search models, which return buggy or even vulnerable code with security/privacy issues. This may impact the downstream software (e.g., stock trading systems and autonomous driving) and cause financial loss and/or life-threatening incidents. In this paper, we demonstrate such attacks are feasible and can be quite stealthy. By simply modifying one variable/function name, the attacker can make buggy/vulnerable code rank in the top 11%. Our attack BADCODE features a special trigger generation and injection procedure, making the attack more effective and stealthy. The evaluation is conducted on two neural code search models and the results show our attack outperforms baselines by 60%. Our user study demonstrates that our attack is more stealthy than the baseline by two times based on the F1 score

    A Survey of Source Code Search: A 3-Dimensional Perspective

    Full text link
    (Source) code search is widely concerned by software engineering researchers because it can improve the productivity and quality of software development. Given a functionality requirement usually described in a natural language sentence, a code search system can retrieve code snippets that satisfy the requirement from a large-scale code corpus, e.g., GitHub. To realize effective and efficient code search, many techniques have been proposed successively. These techniques improve code search performance mainly by optimizing three core components, including query understanding component, code understanding component, and query-code matching component. In this paper, we provide a 3-dimensional perspective survey for code search. Specifically, we categorize existing code search studies into query-end optimization techniques, code-end optimization techniques, and match-end optimization techniques according to the specific components they optimize. Considering that each end can be optimized independently and contributes to the code search performance, we treat each end as a dimension. Therefore, this survey is 3-dimensional in nature, and it provides a comprehensive summary of each dimension in detail. To understand the research trends of the three dimensions in existing code search studies, we systematically review 68 relevant literatures. Different from existing code search surveys that only focus on the query end or code end or introduce various aspects shallowly (including codebase, evaluation metrics, modeling technique, etc.), our survey provides a more nuanced analysis and review of the evolution and development of the underlying techniques used in the three ends. Based on a systematic review and summary of existing work, we outline several open challenges and opportunities at the three ends that remain to be addressed in future work.Comment: submitted to ACM Transactions on Software Engineering and Methodolog

    Drugs for the Treatment of Muscle Atrophy

    Get PDF
    Muscle mass is maintained through an interplay between anabolic and catabolic pathways. The ubiquitin-proteasome system plays an important role in the proteolysis progress during skeletal muscle atrophy which can be blocked by some proteasome inhibitors. But few studies have demonstrated the ability of these inhibitors to preserve muscle mass and architecture under catabolic condition in vivo. The insulin-like growth factor-1/phosphatidylinositide 3-kinases/protein kinase B/mammalian target of rapamycin (IGF-1/PI3K/Akt/mTOR) pathway was associated with anabolic pathways. The activation of IGF-1 causes muscle hypertrophy; however, it cannot be used as a drug target. Myostatin pathway maintains activation that can induce skeletal muscle atrophy involved with various transcriptional and genetic factors. Skeletal muscle atrophy is a debilitating consequence of multiple chronic diseases and conditions that involve starvation. It reduces treatment options and positive clinical outcomes as well as compromising quality of life and increasing morbidity and mortality. Though considerable research has been undertaken to find the drug target and the molecular mechanisms that improve skeletal muscle atrophy, no drug was approved to treat skeletal muscle atrophy. However, these years, the signaling pathways involved in muscle atrophy were clarified and some effective treatments were currently available to prevent, attenuate, or reverse muscle atrophy for experiment research

    An empirical comparison of fixed-strength and mixed-strength for interaction coverage based prioritization

    Get PDF
    Test case prioritization (TCP) plays an important role in identifying, characterizing, diagnosing and correcting faults quickly. TCP has been widely used to order test cases of different types, including model inputs (also called abstract test cases). Model inputs are constructed by modeling the program according to its input parameters, values, and constraints, and has been used in different testing methods, such as combinatorial interaction testing, and software product line testing. Interaction coveragebased test case prioritization (ICTCP) uses interaction coverage information derived from the model input to order inputs. Previous studies have focused generally on the fixed-strength ICTCP, which adopts a fixed strength(i.e.,thelevelofparameterinteractions)tosupporttheICTCPprocess.Itisgenerallyacceptedthat using more strengths for ICTCP, i.e., mixed-strength ICTCP, may give better ordering than fixed-strength. To confirm whether mixed-strength is better than fixed-strength, in this paper we report on an extensive empirical study using five real-world programs (written in C), each of which has six versions. The results oftheempiricalstudiesshowthatmixed-strengthhasbetterratesofinteractioncoverageoverallthanfixedstrength, but they have very similar rates of fault detection. Our results also show that fixed-strength should be used instead of the mixed-strength at the later stage of software testing. Finally, we offer some practical guidelinesfortesterswhenusinginteractioncoverageinformationtoprioritizemodelinputs,underdifferent testing scenarios and resources
    corecore